
Amy Chang

Head of AI Threat Intelligence and Security Research

AI Software & Platform

Amy Chang is a renowned AI security and cybersecurity expert with nearly two decades of practitioner, academic, and government experience. She leads the AI Threat Intelligence and Security Research team at Cisco, building first-in-class AI threat intelligence capabilities to monitor the threat landscape and defensive capabilities that secure enterprises against AI risk. Her work also focuses on developing scalable, sustainable frameworks for AI security. Amy is Adjunct Faculty in Cybersecurity & Emerging Threats at the Middlebury Institute of International Studies. Previously, she held numerous senior roles across start-ups, corporations, government, the military, and non-profits, including as an Executive Director for Global Cybersecurity Operations at JPMorgan Chase, where she led cyber threat intelligence teams and strengthened JPMorgan's intelligence-driven cybersecurity defense. She was formerly a Staff Director on the House Foreign Affairs Committee, where she worked on Asia policy and legislation, and served as an officer in the U.S. Navy. Amy is a graduate of Harvard University and Brown University.

Articles

Introducing the Cisco LLM Security Leaderboard: Bringing Transparency to AI Security

4 min read

Today, Cisco launched the LLM Security Leaderboard, a comprehensive resource for evaluating model risk and susceptibility to adversarial attacks. By providing transparent, adversarial evaluation signals, this leaderboard contextualizes model performance metrics against evaluations of how models handle malicious prompts, jailbreak attempts, and other manipulation strategies. The tool empowers organizations with a clear, objective understanding of model risk by mapping threats to our AI Safety and Security Framework taxonomy, and informs defense-in-depth approaches to AI deployments.  

Identifying and remediating a persistent memory compromise in Claude Code

4 min read

We recently discovered a method to compromise Claude Code’s memory and maintain persistence beyond our immediate session into every project, every session, and even after reboots. In this post, we’ll break down how we were able to poison an AI […]

Cisco explores the expanding threat landscape of AI security for 2026 with its latest annual report

3 min read

Thank you to all of the contributors of the State of AI Security 2026, including Amy Chang, Tiffany Saade, Emile Antone, and the broader Cisco AI research team. As artificial intelligence (AI) technology and enterprise AI adoption advance at a rapid pace, the security landscape around it is expanding faster, leaving many defenders struggling to keep […]

AIUC-1 operationalizes Cisco’s AI Security Framework

1 min read

This blog is jointly written by Amy Chang, Hyrum Anderson, Rajiv Dattani, and Rune Kvist. We are excited to announce Cisco as a technical contributor to AIUC-1. The standard will operationalize Cisco’s Integrated AI Security and Safety Framework (AI Security Framework), enabling more secure AI adoption. AI risks are no longer theoretical. We have seen […]

Personal AI Agents like OpenClaw Are a Security Nightmare

4 min read

This blog is written in collaboration by Amy Chang, Vineeth Sai Narajala, and Idan Habler. Over the past few weeks, Clawdbot (then renamed Moltbot, later renamed OpenClaw) has achieved virality as an open-source, self-hosted personal AI assistant agent that runs locally and executes actions on the user’s behalf. The bot’s explosive rise is driven by […]

Cisco’s MCP Scanner Introduces Behavioral Code Threat Analysis

4 min read

A model context protocol (MCP) tool can claim to execute a benign task such as “validate email addresses,” but if the tool is compromised, it can be redirected to fulfill ulterior motives, such as exfiltrating your entire address book to an external server. Traditional security scanners could flag suspicious network calls or dangerous functions and […]

Introducing Cisco’s Integrated AI Security and Safety Framework

7 min read

The New Baseline for AI Security: AI is no longer an experimental capability or a back-office automation tool: it is becoming a core operational layer inside modern enterprises. The pace of adoption is breathtaking. Yet, according to Cisco’s 2025 AI Readiness Index, only 29 percent of companies believe they are adequately equipped to defend against […]

Death by a Thousand Prompts: Open Model Vulnerability Analysis

6 min read

AI models have become increasingly democratized, and the proliferation and adoption of open weight models has contributed significantly to this reality. Open-weight models provide researchers, developers, and AI enthusiasts with a solid foundation for limitless use cases and applications.  As of August 2025, leading U.S., Chinese, and European models have around 400M total downloads on […]

Dynamic AI Security: How Cisco AI Defense Protects Against New Threats

4 min read

Introduction: The pace at which applications for artificial intelligence are evolving continues to impress. Businesses that once considered taking advantage of AI’s sophisticated predictive and natural language capabilities are now evaluating adoption of AI systems that have the ability to access internal data, make complex decisions, and have high levels of autonomy. As we continue […]